Uses:-
1. Used as a dimensionality reduction technique
2. Used as a pre-processing step for pattern classification (see the sketch after this list)
3. Aims to project a dataset onto a lower-dimensional space
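As a quick illustration of use 2, here is a minimal sketch of LDA as a pre-processing step in front of a classifier. It assumes scikit-learn; the iris dataset and the k-nearest-neighbours classifier are illustrative choices, not something prescribed by the method itself:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = load_iris(return_X_y=True)

# Project the 4 iris features onto 2 discriminants, then classify there.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=2),
                    KNeighborsClassifier(n_neighbors=5))
print(cross_val_score(clf, X, y, cv=5).mean())
```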
Sounds similar to PCA right?
Breaking it down further:-
The goal of LDA is to project a feature space (a dataset of d-dimensional samples) onto a smaller subspace of dimension k (where k <= d - 1) while maintaining the class-discriminatory information. In fact, for c classes at most c - 1 of those directions carry class-discriminatory information, so in practice k <= c - 1.
Both PCA and LDA are linear transformation techniques used for dimensionality reduction. PCA is unsupervised, while LDA is supervised because it uses the class labels (the dependent variable).
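To make that distinction concrete, here is a small sketch, again assuming scikit-learn and the iris data: PCA is fit on X alone, while LDA also consumes the class labels y, and both project the d = 4 features down to k = 2:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
print(X.shape)                                # (150, 4): n samples, d features

X_pca = PCA(n_components=2).fit_transform(X)  # unsupervised: uses X only
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)  # supervised: uses X and y
print(X_pca.shape, X_lda.shape)               # (150, 2) (150, 2)
```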
Summarizing the LDA approach in 5 steps:-
1. Compute the d-dimensional mean vectors for the different classes from the dataset.
2. Compute the scatter matrices (the between-class and within-class scatter matrices, S_B and S_W).
3. Compute the eigenvectors (e1, e2, ..., ed) and corresponding eigenvalues (λ1, λ2, ..., λd) for the scatter matrices.
4. Sort the eigenvectors by decreasing eigenvalue and choose the k eigenvectors with the largest eigenvalues to form a d * k matrix W (where every column represents an eigenvector).
5. Use this d * k eigenvector matrix to transform the samples onto the new subspace. This can be summarized by the matrix multiplication Y = X * W (where X is an n * d matrix representing the n samples, and Y contains the transformed n * k samples in the new subspace).
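The five steps translate almost line-for-line into NumPy. The sketch below is a minimal from-scratch implementation under the usual assumptions (the iris data again for concreteness, and solving the eigenproblem for inv(S_W) * S_B, the standard way the two scatter matrices are combined):

```python
import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)   # n = 150 samples, d = 4 features
n, d = X.shape
classes = np.unique(y)

# Step 1: d-dimensional mean vectors, one per class.
means = {c: X[y == c].mean(axis=0) for c in classes}
overall_mean = X.mean(axis=0)

# Step 2: within-class scatter S_W and between-class scatter S_B.
S_W = np.zeros((d, d))
for c in classes:
    Xc = X[y == c] - means[c]       # center each class on its own mean
    S_W += Xc.T @ Xc

S_B = np.zeros((d, d))
for c in classes:
    n_c = (y == c).sum()
    diff = (means[c] - overall_mean).reshape(d, 1)
    S_B += n_c * (diff @ diff.T)

# Step 3: eigenvectors and eigenvalues of inv(S_W) @ S_B.
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)

# Step 4: sort by decreasing eigenvalue, keep the top k columns as W (d x k).
k = 2
order = np.argsort(eigvals.real)[::-1]
W = eigvecs[:, order[:k]].real

# Step 5: project the samples onto the new subspace: Y = X W (n x k).
Y = X @ W
print(Y.shape)                      # (150, 2)
```

With 3 iris classes there are at most c - 1 = 2 non-zero eigenvalues, which is why k = 2 is the natural choice here.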